A hierarchical model of goal directed navigation selects trajectories in a visual environment.

Authors

  • Uğur M Erdem
  • Michael J Milford
  • Michael E Hasselmo
Abstract

We have developed a Hierarchical Look-Ahead Trajectory Model (HiLAM) that incorporates the firing pattern of medial entorhinal grid cells in a planning circuit that includes interactions with hippocampus and prefrontal cortex. We show the model's flexibility in representing large real world environments using odometry information obtained from challenging video sequences. We acquire the visual data from a camera mounted on a small tele-operated vehicle. The camera has a panoramic field of view with its focal point approximately 5 cm above the ground level, similar to what would be expected from a rat's point of view. Using established algorithms for calculating perceptual speed from the apparent rate of visual change over time, we generate raw dead reckoning information which loses spatial fidelity over time due to error accumulation. We rectify the loss of fidelity by exploiting the loop-closure detection ability of a biologically inspired, robot navigation model termed RatSLAM. The rectified motion information serves as a velocity input to the HiLAM to encode the environment in the form of grid cell and place cell maps. Finally, we show goal directed path planning results of HiLAM in two different environments, an indoor square maze used in rodent experiments and an outdoor arena more than two orders of magnitude larger than the indoor maze. Together these results bridge for the first time the gap between higher fidelity bio-inspired navigation models (HiLAM) and more abstracted but highly functional bio-inspired robotic mapping systems (RatSLAM), and move from simulated environments into real-world studies in rodent-sized arenas and beyond.
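The abstract describes generating raw dead-reckoning information by integrating velocity estimates over time, with spatial fidelity degrading as errors accumulate. A minimal sketch of that integration step is shown below; the function name and the `(speed, angular_velocity, dt)` step format are illustrative assumptions, not taken from the paper.

```python
import math

def integrate_odometry(steps, heading0=0.0):
    """Dead-reckon a 2D path from velocity samples.

    Each step is (speed, angular_velocity, dt). Any bias or noise in
    the velocity estimates compounds with every step, which is why a
    raw dead-reckoned path drifts and benefits from loop-closure
    correction of the kind RatSLAM provides.
    """
    x, y, theta = 0.0, 0.0, heading0
    path = [(x, y)]
    for speed, omega, dt in steps:
        theta += omega * dt            # update heading first
        x += speed * math.cos(theta) * dt
        y += speed * math.sin(theta) * dt
        path.append((x, y))
    return path

# Four unit-speed steps straight ahead end at (4, 0).
path = integrate_odometry([(1.0, 0.0, 1.0)] * 4)
```

Because position is a running sum, even a small constant error in `speed` or `omega` grows without bound over the trajectory; loop-closure detection rectifies the map by recognizing revisited places and redistributing the accumulated error.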


Related articles

GPS Jamming Detection in UAV Navigation Using Visual Odometry and HOD Trajectory Descriptor

Auto-navigation of unmanned aerial vehicles (UAVs) in outdoor environments is performed using a Global Positioning System (GPS) receiver. The power of the GPS signal at the earth's surface is very low, which can degrade the performance of GPS receivers in environments contaminated with other sources of radio frequency interference (RFI). GPS jamming and spoofing are the most serious a...


A biologically inspired hierarchical goal directed navigation model.

We propose an extended version of our previous goal directed navigation model based on forward planning of trajectories in a network of head direction cells, persistent spiking cells, grid cells, and place cells. In our original work the animat incrementally creates a place cell map by random exploration of a novel environment. After the exploration phase, the animat decides on its next movemen...


A goal-directed spatial navigation model using forward trajectory planning based on grid cells.

A goal-directed navigation model is proposed based on forward linear look-ahead probe of trajectories in a network of head direction cells, grid cells, place cells and prefrontal cortex (PFC) cells. The model allows selection of new goal-directed trajectories. In a novel environment, the virtual rat incrementally creates a map composed of place cells and PFC cells by random exploration. After e...


A New Method of Mobile Robot Navigation: Shortest Null Space

In this paper, a new method is proposed for the navigation of a mobile robot in an unknown dynamic environment. The robot can detect only a limited radius of its surroundings with its sensors, and it moves along the shortest null space (SNS) toward the goal. In the case of no obstacle, the SNS is a direct path from the robot to the goal; however, in the presence of obstacles, the SNS is a space around the r...


Computational Properties of the Hippocampus Increase the Efficiency of Goal-Directed Foraging through Hierarchical Reinforcement Learning

The mammalian brain is thought to use a version of Model-based Reinforcement Learning (MBRL) to guide "goal-directed" behavior, wherein animals consider goals and make plans to acquire desired outcomes. However, conventional MBRL algorithms do not fully explain animals' ability to rapidly adapt to environmental changes, or learn multiple complex tasks. They also require extensive computation, s...




Journal:
  • Neurobiology of learning and memory

Volume 117, Issue 

Pages  -

Publication year: 2015